Current Issue: April–June | Volume: 2022 | Issue: 2 | Articles: 5
User interface design patterns are acknowledged as standard solutions to recurring design problems. The heterogeneity of existing design patterns makes the selection of relevant ones difficult. To tackle these concerns, the current work contributes in a twofold manner. The first contribution is the development of a recommender system for selecting the most relevant design patterns in the Human-Computer Interaction (HCI) domain. This system introduces a hybrid approach that combines text-based and ontology-based techniques, using semantic similarity together with ontology models to retrieve appropriate HCI design patterns. The second contribution addresses the validation of the proposed recommender system with respect to users' intention to accept it, by assessing the perceived experience and the perceived accuracy. To this purpose, we conducted a user-centric evaluation experiment wherein participants were invited to fill in pre-study and post-test questionnaires. The findings of the evaluation study revealed that the perceived experience of the proposed system's quality and the accuracy of the recommended design patterns were assessed positively....
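The abstract above describes a hybrid scoring scheme that blends text similarity with ontology knowledge. A minimal sketch of that idea, assuming an illustrative pattern catalogue, Jaccard token overlap as the text measure, and a simple category-match bonus as the ontology signal (the pattern names, categories, and weight `alpha` are all hypothetical, not taken from the paper):

```python
# Hypothetical sketch of a hybrid (text + ontology) design-pattern recommender.
# Pattern names, categories, and the weight alpha are illustrative only.

def jaccard(a, b):
    """Token-set overlap used as a lightweight text-similarity measure."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

PATTERNS = {
    "Wizard": {"text": "guide the user through a long task step by step",
               "category": "navigation"},
    "Autocomplete": {"text": "suggest completions while the user types input",
                     "category": "data entry"},
    "Breadcrumbs": {"text": "show the user their location in the site hierarchy",
                    "category": "navigation"},
}

def recommend(query, query_category, alpha=0.7):
    """Blend text similarity with an ontology-category match bonus."""
    scores = {}
    for name, pattern in PATTERNS.items():
        text_score = jaccard(query, pattern["text"])
        onto_score = 1.0 if pattern["category"] == query_category else 0.0
        scores[name] = alpha * text_score + (1 - alpha) * onto_score
    return max(scores, key=scores.get)
```

A query such as `recommend("suggest input while the user types", "data entry")` would then rank the pattern whose description and ontology category both match highest; a production system would replace the toy similarity with the semantic-similarity measure the paper proposes.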
Primary malignancies in adult brains are globally fatal. Computer vision, and especially recent developments in artificial intelligence (AI), has created opportunities to automatically characterize and diagnose tumor lesions in the brain. AI approaches have achieved unprecedented accuracy in different image analysis tasks, including differentiating tumor-containing brains from healthy brains. AI models, however, perform as black boxes, concealing the rational interpretations that are an essential step towards translating AI imaging tools into clinical routine. Explainable AI approaches aim to visualize the high-level features of trained models or to integrate interpretability into the training process. This study aims to evaluate the performance of selected deep-learning algorithms on localizing tumor lesions and distinguishing the lesion from healthy regions in magnetic resonance imaging contrasts. Despite a significant correlation between classification and lesion-localization accuracy (R = 0.46, p = 0.005), the known AI algorithms examined in this study classify some tumor-containing brains based on non-relevant features. The results suggest that explainable AI approaches can develop an intuition for model interpretability and may play an important role in the performance evaluation of deep-learning models. Developing explainable AI approaches will be an essential tool to improve human–machine interactions and assist in the selection of optimal training methods....
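The reported R = 0.46 is a Pearson correlation between per-model classification accuracy and lesion-localization accuracy. A small, self-contained sketch of that computation (the accuracy values one would feed in are the study's own measurements; none are reproduced here):

```python
# Pearson correlation between two paired score lists, as used to relate
# classification accuracy to lesion-localization accuracy across models.
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

Feeding in each model's (classification accuracy, localization accuracy) pair yields the correlation; a value well below 1 is consistent with the paper's observation that high classification accuracy does not guarantee that the model attends to the lesion.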
This paper describes the construction of an electronic system that can recognise, in real time, twelve manual gestures made by an interlocutor with one hand in a scene with controlled lighting and background. The implemented system supports hand rotations, translations, and scale changes in the camera plane. The system requires an Analog Devices ADSP BF-533 EZ-Kit Lite evaluation board. As a final stage in the development process, displaying the letter associated with a recognised gesture is recommended; a visual representation of the proposed algorithm is also available in the visual toolbox of a personal computer. Individuals who are deaf or hard of hearing will be able to communicate with the general population thanks to new technologies that connect them to computers. This technology is being used to create new applications....
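The invariance to translation and scale that the abstract mentions is commonly obtained by normalizing the hand contour before matching. A hedged sketch of one such normalization, not the paper's actual algorithm: center the contour points on their centroid and rescale to unit RMS radius, so that shifted or resized versions of the same gesture map to the same representation.

```python
# Illustrative translation/scale normalization of a 2-D point set (e.g. a
# hand contour) prior to gesture matching. Not the paper's actual method.

def normalize(points):
    """Center points on their centroid and scale to unit RMS radius."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    centered = [(x - cx, y - cy) for x, y in points]
    rms = (sum(x * x + y * y for x, y in centered) / n) ** 0.5
    return [(x / rms, y / rms) for x, y in centered]
```

After this step, two contours that differ only by a shift and a uniform rescaling become identical, leaving rotation as the remaining variation to handle (e.g. with rotation-invariant descriptors).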
This paper proposes a novel lightweight visual perception system with Incremental Learning (IL), tailored to child–robot interaction scenarios. Specifically, this encompasses both an action and emotion recognition module, with the former wrapped around an IL system, allowing novel actions to be easily added. This IL system enables the tutor aspiring to use robotic agents in interaction scenarios to further customize the system according to children’s needs. We perform extensive evaluations of the developed modules, achieving state-of-the-art results on both the children’s action BabyRobot dataset and the children’s emotion EmoReact dataset. Finally, we demonstrate the robustness and effectiveness of the IL system for action recognition by conducting a thorough experimental analysis for various conditions and parameters....
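The incremental-learning behaviour described above, where a tutor can register a novel action without retraining, can be sketched with a nearest-class-mean classifier over feature embeddings: adding a class only stores the mean of a few example vectors. The class names and embeddings below are hypothetical and unrelated to the BabyRobot dataset.

```python
# Minimal sketch of class-incremental action recognition via a
# nearest-class-mean classifier; class names and vectors are hypothetical.

class IncrementalNCM:
    def __init__(self):
        self.means = {}  # class name -> mean embedding

    def add_class(self, name, embeddings):
        """Register a novel action from a few example embeddings."""
        dim = len(embeddings[0])
        self.means[name] = [sum(e[i] for e in embeddings) / len(embeddings)
                            for i in range(dim)]

    def predict(self, x):
        """Assign x to the class whose mean embedding is nearest."""
        def dist(mean):
            return sum((a - b) ** 2 for a, b in zip(x, mean))
        return min(self.means, key=lambda c: dist(self.means[c]))
```

Because `add_class` touches no existing parameters, previously learned actions are unaffected, which is the property that makes such schemes attractive for tutor-driven customization.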
To establish a safe human–robot interaction in collaborative agricultural environments, a field experiment was performed, acquiring data from wearable sensors placed at five different body locations on 20 participants. The human–robot collaborative task presented in this study involved six well-defined continuous sub-activities, which were executed under several variants to capture, as much as possible, the different ways in which someone can carry out certain synergistic actions in the field. The obtained dataset was made publicly accessible, thus enabling future meta-studies for machine learning models focusing on human activity recognition, and ergonomics aiming to identify the main risk factors for possible injuries....
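Datasets like the one described are typically consumed by activity-recognition models after segmenting the continuous sensor streams into fixed-size overlapping windows. A minimal sketch of that preprocessing step, with an arbitrary illustrative window size and step (the dataset's own sampling rate and window choices are not given here):

```python
# Hedged sketch: overlapping sliding windows over a wearable-sensor stream,
# a standard preprocessing step for human-activity-recognition models.
# Window size and step are arbitrary illustrative choices.

def sliding_windows(samples, size, step):
    """Return overlapping windows of `size` samples, advancing by `step`."""
    return [samples[i:i + size]
            for i in range(0, len(samples) - size + 1, step)]
```

Each window is then labelled with the sub-activity performed during it and fed to the classifier; the overlap between consecutive windows trades redundancy for robustness at activity boundaries.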